fair representation
We thank the reviewers for their insightful feedback and encouraging words. Below, we address the reviewers' comments and concerns, all of which we will incorporate into the next version of our work.

R1: Can you investigate the impact of robustly training the classifier on accuracy and certifiability? We will provide a more thorough investigation in the next revision.

R2: How does your work compare with counterfactual and indirect fairness?

R2: Can you extend your discussion of the framework from McNamara et al. [10]?
Individual and group fairness in geographical partitioning
Ryzhov, Ilya O., Carlsson, John Gunnar, Zhu, Yinchu
Consider a service system in which individuals are served by facilities at different locations within a geographical region. For example, the facilities could represent schools, polling places, or commercial fulfillment centers. The geographical partitioning problem (Carlsson & Devulapalli 2013) divides the region into non-overlapping districts, such that all individuals residing in the same district are served by the same facility. The goal is to choose a partition that optimizes some measure of social welfare, most commonly the average travel cost per individual (Carlsson et al. 2016). We formulate and study a novel variant of this problem where the population is heterogeneous, consisting of multiple demographic groups, each with a different spatial distribution throughout the region. As before, we optimize the expected cost, but now we also impose a new group fairness condition: each subpopulation can be neither over- nor under-represented at any facility. In other words, the districts are designed in such a way that the proportion of the population belonging to a particular group in any district must match that group's incidence in the entire population. This condition is also known as "demographic parity" in the literature (Dwork et al. 2012).
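The demographic-parity condition described above can be made concrete with a small check: for every district, each group's local share should match its share in the whole population. The sketch below is illustrative only (function and variable names are not from the paper) and measures the worst-case deviation from parity over all districts and groups.

```python
# Hypothetical sketch: checking the demographic-parity condition for a
# geographical partition. Names are illustrative, not from the paper.
from collections import Counter

def parity_violation(assignments, groups):
    """assignments[i] = district of individual i; groups[i] = demographic group.
    Returns the largest absolute gap between a group's share within any
    district and that group's share in the entire population."""
    total = len(groups)
    pop_share = {g: c / total for g, c in Counter(groups).items()}
    by_district = {}
    for d, g in zip(assignments, groups):
        by_district.setdefault(d, []).append(g)
    worst = 0.0
    for members in by_district.values():
        local = Counter(members)
        for g, target in pop_share.items():
            gap = abs(local.get(g, 0) / len(members) - target)
            worst = max(worst, gap)
    return worst

# Two districts that each mirror the 50/50 population split: zero violation.
print(parity_violation([0, 0, 1, 1], ["A", "B", "A", "B"]))  # 0.0
```

A perfectly segregated partition of the same population (each district containing only one group) would instead yield the maximal violation of 0.5.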
Fair Representation Learning with Controllable High Confidence Guarantees via Adversarial Inference
Luo, Yuhong, Hoag, Austin, Wang, Xintong, Thomas, Philip S., Grabowicz, Przemyslaw A.
Representation learning is increasingly applied to generate representations that generalize well across multiple downstream tasks. Ensuring fairness guarantees in representation learning is crucial to prevent unfairness toward specific demographic groups in downstream tasks. In this work, we formally introduce the task of learning representations that achieve high-confidence fairness. We aim to guarantee that demographic disparity in every downstream prediction remains bounded by a *user-defined* error threshold $ε$, with *controllable* high probability. To this end, we propose the ***F**air **R**epresentation learning with high-confidence **G**uarantees (FRG)* framework, which provides these high-confidence fairness guarantees by leveraging an optimized adversarial model. We empirically evaluate FRG on three real-world datasets, comparing its performance to six state-of-the-art fair representation learning methods. Our results demonstrate that FRG consistently bounds unfairness across a range of downstream models and tasks.
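The quantity that FRG bounds, demographic disparity in a downstream prediction, can be sketched as the gap in positive-prediction rates between groups, checked against the user-defined threshold $ε$. This is not the FRG algorithm itself; it is a minimal illustration of the fairness criterion, with assumed function names.

```python
# Illustrative sketch (not the FRG method): measuring demographic disparity
# of a downstream predictor and checking it against a threshold epsilon.
def demographic_disparity(preds, sensitive):
    """Absolute gap between the highest and lowest positive-prediction
    rates across demographic groups."""
    rates = {}
    for s in set(sensitive):
        idx = [i for i, v in enumerate(sensitive) if v == s]
        rates[s] = sum(preds[i] for i in idx) / len(idx)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

epsilon = 0.1  # user-defined error threshold
preds = [1, 0, 1, 1, 0, 1]       # binary downstream predictions
sensitive = [0, 0, 0, 1, 1, 1]   # group membership
gap = demographic_disparity(preds, sensitive)
print(gap <= epsilon)  # True: both groups have a 2/3 positive rate
```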
FairContrast: Enhancing Fairness through Contrastive Learning and Customized Augmenting Methods on Tabular Data
Tayebi, Aida, Yalabadi, Ali Khodabandeh, Yazdani-Jahromi, Mehdi, Garibay, Ozlem Ozmen
As AI systems become more embedded in everyday life, the development of fair and unbiased models becomes more critical. Considering the social impact of AI systems is not merely a technical challenge but a moral imperative. As evidenced in numerous research studies, learning fair and robust representations has proven to be a powerful approach to effectively debiasing algorithms and improving fairness while maintaining essential information for prediction tasks. Representation learning frameworks, particularly those that utilize self-supervised and contrastive learning, have demonstrated superior robustness and generalizability across various domains. Despite the growing interest in applying these approaches to tabular data, the issue of fairness in the learned representations remains underexplored. In this study, we introduce a contrastive learning framework specifically designed to address bias and learn fair representations in tabular datasets. By strategically selecting positive pair samples and employing supervised and self-supervised contrastive learning, we significantly reduce bias compared to existing state-of-the-art contrastive learning models for tabular data. Our results demonstrate the efficacy of our approach in mitigating bias with a minimal trade-off in accuracy, and in leveraging the learned fair representations for various downstream tasks.
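One common way to "strategically select positive pair samples" for fairness is to pair each anchor with a sample that shares its label but comes from a different demographic group, so the encoder is pushed to align representations across groups. The sketch below is a hypothetical illustration of that idea, not the paper's actual pairing scheme; all names are assumed.

```python
# Hypothetical sketch of fairness-aware positive-pair selection for
# supervised contrastive learning on tabular data: each anchor is paired
# with a sample of the same label from a different demographic group.
import random

def select_positive_pairs(labels, sensitive, seed=0):
    rng = random.Random(seed)
    pairs = []
    for i, (y, s) in enumerate(zip(labels, sensitive)):
        # Candidates: same class label, different sensitive-attribute value.
        candidates = [j for j, (y2, s2) in enumerate(zip(labels, sensitive))
                      if j != i and y2 == y and s2 != s]
        if candidates:
            pairs.append((i, rng.choice(candidates)))
    return pairs

labels    = [1, 1, 0, 0]
sensitive = [0, 1, 0, 1]
print(select_positive_pairs(labels, sensitive))  # [(0, 1), (1, 0), (2, 3), (3, 2)]
```

The resulting pairs would then feed a standard supervised contrastive loss; the cross-group constraint on the positives is what discourages the representation from encoding the sensitive attribute.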